3 research outputs found

    Data-Driven Model Reduction and Nonlinear Model Predictive Control of an Air Separation Unit by Applied Koopman Theory

    Achieving real-time capability is an essential prerequisite for the industrial implementation of nonlinear model predictive control (NMPC). Data-driven model reduction offers a way to obtain low-order control models from complex digital twins. In particular, data-driven approaches require little expert knowledge of the particular process and its model, and they provide reduced models of a well-defined generic structure. Herein, we apply our recently proposed data-driven reduction strategy based on Koopman theory [Schulze et al. (2022), Comput. Chem. Eng.] to generate a low-order control model of an air separation unit (ASU). The reduced Koopman model combines autoencoders with linear latent dynamics and is constructed using machine learning. Further, we present an NMPC implementation that uses derivative computations tailored to the fixed block structure of reduced Koopman models. Our reduction approach with tailored NMPC implementation enables real-time NMPC of an ASU while reducing the average CPU time by 98 %.
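
    To make the model structure concrete, here is a minimal PyTorch sketch of an autoencoder wrapped around linear latent dynamics, in the spirit of such reduced Koopman models. The layer sizes, the input matrix B, and the equal weighting of the two loss terms are illustrative assumptions, not the architecture or training setup of the paper.

        # Minimal Koopman-autoencoder sketch: nonlinear encoder/decoder around
        # strictly linear latent dynamics z_{k+1} = A z_k + B u_k. All sizes
        # and names below are illustrative assumptions.
        import torch
        import torch.nn as nn

        class KoopmanAutoencoder(nn.Module):
            def __init__(self, n_state, n_input, n_latent):
                super().__init__()
                self.encoder = nn.Sequential(
                    nn.Linear(n_state, 64), nn.Tanh(), nn.Linear(64, n_latent))
                self.decoder = nn.Sequential(
                    nn.Linear(n_latent, 64), nn.Tanh(), nn.Linear(64, n_state))
                self.A = nn.Linear(n_latent, n_latent, bias=False)  # latent dynamics
                self.B = nn.Linear(n_input, n_latent, bias=False)   # input map

            def forward(self, x, u):
                z = self.encoder(x)
                z_next = self.A(z) + self.B(u)  # linear one-step prediction in latent space
                return self.decoder(z), self.decoder(z_next)

        def training_loss(model, x, u, x_next):
            x_rec, x_pred = model(x, u)
            # Reconstruction error + one-step prediction error (equal weights assumed).
            return (nn.functional.mse_loss(x_rec, x)
                    + nn.functional.mse_loss(x_pred, x_next))

    The point for control is that all nonlinearity is confined to the encoder and decoder, while predictions over the horizon only propagate the linear latent map; this fixed block structure is what the tailored derivative computation in the NMPC implementation exploits.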

    A Recursively Recurrent Neural Network (R2N2) Architecture for Learning Iterative Algorithms

    Meta-learning of numerical algorithms for a given task consists of the data-driven identification and adaptation of an algorithmic structure and the associated hyperparameters. To limit the complexity of the meta-learning problem, neural architectures with a certain inductive bias towards favorable algorithmic structures can, and should, be used. We generalize our previously introduced Runge-Kutta neural network to a recursively recurrent neural network (R2N2) superstructure for the design of customized iterative algorithms. In contrast to off-the-shelf deep learning approaches, it features a distinct division into modules for the generation of information and for the subsequent assembly of this information towards a solution. Local information in the form of a subspace is generated by subordinate, inner iterations of recurrent function evaluations starting at the current outer iterate. The update to the next outer iterate, computed as a linear combination of these evaluations that reduces the residual in this space, constitutes the output of the network. We demonstrate that regular training of the weight parameters inside the proposed superstructure on input/output data of various computational problem classes yields iterations similar to Krylov solvers for linear equation systems, Newton-Krylov solvers for nonlinear equation systems, and Runge-Kutta integrators for ordinary differential equations. Due to its modularity, the superstructure can be readily extended with functionalities needed to represent more general classes of iterative algorithms traditionally based on Taylor series expansions.
    (Comment: manuscript, 21 pages, 10 figures; supporting information, 2 pages, 1 figure.)
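
    The following numpy sketch illustrates the described division of labor: inner recurrent evaluations of a residual function f generate a local subspace, and the outer update is a linear combination of these evaluations chosen to reduce the residual. In the R2N2 itself the combination weights are trained network parameters; as a hypothetical stand-in, this sketch picks them by least squares against a finite-difference Jacobian action, which yields Newton-Krylov-like behavior. The step size h and the number of inner stages are assumptions.

        import numpy as np

        def r2n2_step(f, x, n_inner=3, h=1e-4):
            # Inner iterations: recurrent function evaluations starting at the
            # current outer iterate span a local (Krylov-like) subspace.
            r = f(x)
            basis, v = [r], r
            for _ in range(n_inner - 1):
                v = f(x + h * v)
                basis.append(v)
            V = np.stack(basis, axis=1)  # columns span the subspace

            # Outer update: linear combination of the basis vectors that reduces
            # the linearized residual; J @ V is approximated by finite differences.
            JV = np.stack([(f(x + h * V[:, j]) - r) / h for j in range(V.shape[1])],
                          axis=1)
            a, *_ = np.linalg.lstsq(JV, -r, rcond=None)
            return x + V @ a

        # Toy usage: on a nonlinear system, the iteration behaves like a
        # Newton-Krylov solver.
        f = lambda x: np.array([x[0]**2 + x[1] - 3.0, x[0] + x[1]**2 - 5.0])
        x = np.ones(2)
        for _ in range(10):
            x = r2n2_step(f, x)
        print(x)  # converges towards the root (1, 2)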

    Deterministic Global Nonlinear Model Predictive Control with Neural Networks Embedded

    Nonlinear model predictive control requires the solution of nonlinear programs with potentially multiple local solutions. Here, deterministic global optimization can guarantee finding a global optimum. However, its application is currently severely limited by computational cost and requires further developments in problem formulation, optimization solvers, and computing architectures. In this work, we propose a reduced-space formulation for the global optimization of problems with recurrent neural networks (RNNs) embedded, based on our recent work on feed-forward artificial neural networks embedded. The method significantly reduces the dimensionality of the optimization problem, lowering the computational cost. We implement the NMPC problem in our open-source solver MAiNGO and solve it using parallel computing on 40 cores. We demonstrate real-time capability for the illustrative van de Vusse CSTR case study. We further propose two alternatives to reduce computational time: (i) reformulating the RNN model by exposing a selected state variable to the optimizer; (ii) replacing the RNN with a neural multi-model. In our numerical case studies, each proposal reduces the computational time by an order of magnitude.
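
    To illustrate the reduced-space idea on a toy scale: a full-space formulation would introduce every hidden state of the RNN over the horizon as a decision variable tied to its neighbors by equality constraints, whereas a reduced-space formulation evaluates the network recursively inside the objective, leaving only the controls as variables. The sketch below uses random toy weights, a tracking objective, and scipy's local solver as stand-ins; the paper itself solves the problem to guaranteed global optimality with MAiNGO.

        import numpy as np
        from scipy.optimize import minimize

        # Toy RNN weights (illustrative assumptions, not a trained model).
        rng = np.random.default_rng(0)
        Wx = 0.3 * rng.normal(size=(4, 4))  # hidden-to-hidden
        Wu = rng.normal(size=(4, 1))        # input-to-hidden
        Wy = rng.normal(size=(1, 4))        # hidden-to-output

        def rollout(u_seq, x0=np.zeros(4)):
            # Reduced space: hidden states are computed recursively here and
            # never appear as decision variables of the optimizer.
            x, ys = x0, []
            for u in u_seq:
                x = np.tanh(Wx @ x + Wu @ np.array([u]))
                ys.append(float(Wy @ x))
            return np.array(ys)

        horizon, setpoint = 10, 0.5

        def objective(u):
            # Setpoint tracking plus a small control penalty (assumed weights).
            return np.sum((rollout(u) - setpoint) ** 2) + 1e-3 * np.sum(u ** 2)

        res = minimize(objective, np.zeros(horizon), bounds=[(-1.0, 1.0)] * horizon)
        # Decision variables: 10 controls; a full-space formulation would add
        # 10 * 4 hidden states plus the corresponding equality constraints.
        print(res.x)

    For this horizon of 10 with 4 hidden states, the full-space problem would carry 50 variables and 40 equality constraints, while the reduced space keeps only the 10 controls, which is the dimensionality reduction the abstract refers to.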